An Analysis of the Search Spaces for Generate and Validate Patch Generation Systems
We present the first systematic analysis of the characteristics of patch
search spaces for automatic patch generation systems. We analyze the search
spaces of two current state-of-the-art systems, SPR and Prophet, with 16
different search space configurations. Our results are derived from an analysis
of 1104 different search spaces and 768 patch generation executions. Together
these experiments consumed over 9000 hours of CPU time on Amazon EC2.
The analysis shows that 1) correct patches are sparse in the search spaces
(typically at most one correct patch per search space per defect), 2) incorrect
patches that nevertheless pass all of the test cases in the validation test
suite are typically orders of magnitude more abundant, and 3) leveraging
information other than the test suite is therefore critical for enabling the
system to successfully isolate correct patches.
We also characterize a key tradeoff in the structure of the search spaces.
Larger and richer search spaces that contain correct patches for more defects
can actually cause systems to find fewer, not more, correct patches. We
identify two reasons for this phenomenon: 1) increased validation times because
of the presence of more candidate patches and 2) more incorrect patches that
pass the test suite and block the discovery of correct patches. These
fundamental properties, which are all characterized for the first time in this
paper, help explain why past systems often fail to generate correct patches and
help identify challenges, opportunities, and productive future directions for
the field.
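The sparsity finding above can be illustrated with a toy generate-and-validate loop (a minimal sketch with hypothetical names, not SPR or Prophet code): several syntactically distinct candidate patches pass a weak validation suite, so the test suite alone cannot separate the correct patch from plausible-but-incorrect ones.

```python
# Minimal generate-and-validate sketch (hypothetical example). The defect is a
# wrong comparison in max(); candidate patches replace the condition, and a
# deliberately weak test suite accepts several of them.

TESTS = [((3, 1), 3), ((5, 2), 5)]  # validation suite: (args, expected)

CANDIDATES = {                       # candidate replacement conditions
    "a > b":  lambda a, b: a > b,    # correct patch
    "a >= b": lambda a, b: a >= b,   # also correct for max
    "a != b": lambda a, b: a != b,   # incorrect in general, passes this suite
    "a < b":  lambda a, b: a < b,    # incorrect, rejected by the suite
}

def validate(cond):
    patched = lambda a, b: a if cond(a, b) else b   # patched max()
    return all(patched(*args) == want for args, want in TESTS)

plausible = [name for name, cond in CANDIDATES.items() if validate(cond)]
print(plausible)  # three candidates pass; the tests cannot rank them
```

With a richer candidate set, the number of plausible-but-incorrect patches grows while the correct patches stay sparse, which is exactly the tradeoff the abstract describes.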
Probabilistic and Statistical Analysis of Perforated Patterns
We present a new foundation for the analysis and transformation of computer programs. Standard approaches involve the use of logical reasoning to prove that the applied transformation does not change the observable semantics of the program. Our approach, in contrast, uses probabilistic and statistical reasoning to justify the application of transformations that may change, within probabilistic bounds, the result that the program produces. Loop perforation transforms loops to execute fewer iterations. We show how to use our basic approach to justify the application of loop perforation to a set of computational patterns. Empirical results from computations drawn from the PARSEC benchmark suite demonstrate that these computational patterns occur in practice. We also outline a specification methodology that enables the transformation of subcomputations and discuss how to automate the approach.
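Loop perforation itself is easy to sketch (an illustrative example, not the paper's transformation system): for reduction-style patterns such as a mean over many samples, executing only every k-th iteration trades a bounded accuracy loss for proportionally less work.

```python
# A minimal loop-perforation sketch. Keeping every k-th sample does ~1/k of
# the work; for a mean over roughly uniform data, the error stays small.

def mean(samples, perforation=1):
    kept = samples[::perforation]    # perforated loop: skip iterations
    return sum(kept) / len(kept)

data = [float(i % 100) for i in range(10_000)]
exact = mean(data)                    # full loop: 49.5
approx = mean(data, perforation=4)    # ~4x less work: 48.0 on this data
```

The gap between `exact` and `approx` is the kind of quantity the paper's probabilistic reasoning bounds for each computational pattern.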
Error-efficient computing systems
This survey explores the theory and practice of techniques that make computing systems faster or more energy-efficient by allowing them to make controlled errors. In the same way that systems which use only as much energy as necessary are called energy-efficient, the class of systems addressed by this survey can be called error-efficient: they prevent only as many errors as they need to. The definition of what constitutes an error varies across the parts of a system, and the errors that are acceptable depend on the application at hand. In computing systems, making errors when behaving correctly would be too expensive can conserve resources. The resource conserved may be time: by making some errors, systems may run faster. It may also be energy: a system may draw less power from its batteries or from the electrical grid by avoiding only certain errors while tolerating benign errors associated with reduced power consumption. The resource in question may even be a more abstract quantity, such as the consistency of the ordering of a system's outputs. This survey is for anyone interested in an end-to-end view of one set of techniques that address the theory and practice of making computing systems more efficient by trading errors for improved efficiency.
Perceived-Color Approximation Transforms for Programs that Draw
Human color perception acuity is not uniform across colors. This makes it possible to transform drawing programs to generate outputs whose colors are perceptually equivalent but numerically distinct. One benefit of such transformations is lower display power dissipation on organic light-emitting diode (OLED) displays. We introduce Ishihara, a language for 2D drawing that lets programs specify perceptual-color equivalence classes to use in drawing operations, enabling compile-time and runtime transformations that trade perceived color accuracy for lower OLED display power dissipation.
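The core idea can be sketched as a search over an equivalence class (a hypothetical model, not the Ishihara implementation; the per-subpixel power weights below are assumptions, chosen only to reflect that blue OLED subpixels typically cost more to drive):

```python
# Given colors the program has declared perceptually equivalent, emit the
# member with the lowest estimated OLED power under an assumed linear
# per-subpixel power model.

POWER_WEIGHTS = (0.3, 0.5, 1.0)  # assumed relative R, G, B subpixel costs

def oled_power(rgb):
    return sum(w * c for w, c in zip(POWER_WEIGHTS, rgb))

def cheapest_equivalent(equivalence_class):
    return min(equivalence_class, key=oled_power)

# Two colors declared perceptually equivalent by the program:
cls = [(200, 40, 180), (196, 44, 172)]
choice = cheapest_equivalent(cls)  # the numerically distinct, cheaper member
```

A compile-time version of this transformation would bake `choice` into the drawing code; a runtime version could re-evaluate it per display.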
Efficiency Limits for Value-Deviation-Bounded Approximate Communication
Transferring data between integrated circuits accounts for a growing proportion of system power in wearable and mobile systems. The dynamic component of power dissipated in this data transfer can be reduced by reducing signal transitions. Techniques for reducing signal transitions on communication links have traditionally been targeted at parallel buses and therefore cannot be applied when the transfer interfaces are serial buses. In this article, we address the issue of the best-case effectiveness of techniques to reduce signal transitions on serial buses, if these techniques also allow some error in the numeric interpretation of transmitted data. For many embedded applications, exchanging numeric accuracy for power reduction is a worthwhile tradeoff. We present a study of the efficiency of these value-deviation-bounded approximate serial data encoders (VDBS data encoders) and proofs of their properties. The bounds and proofs we present yield new insights into the best possible tradeoffs between dynamic power reduction and approximation error that can be achieved in practice. The insights are important regardless of whether actual practical VDBS data encoders are implemented in software or in hardware.
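A simple greedy VDBS-style encoder can be sketched as follows (an illustrative construction, not the encoders analyzed in the article): for each sample, transmit the value within the deviation bound whose binary word differs from the previously sent word in the fewest bit positions, since fewer bit flips means fewer serial-bus transitions.

```python
# Greedy value-deviation-bounded encoder sketch for an 8-bit serial link.

BOUND = 2      # allowed numeric deviation per sample
WIDTH = 8      # word width in bits

def bit_flips(a, b):
    """Number of differing bits between two WIDTH-bit words."""
    return bin((a ^ b) & ((1 << WIDTH) - 1)).count("1")

def encode(samples):
    sent, prev = [], 0
    for v in samples:
        # Any value within +/- BOUND of the original is acceptable.
        candidates = range(max(0, v - BOUND), min(255, v + BOUND) + 1)
        best = min(candidates, key=lambda c: bit_flips(c, prev))
        sent.append(best)
        prev = best
    return sent

orig = [128, 127, 129, 126]      # raw samples hover around 128
enc = encode(orig)               # encoder holds the line at 128
flips = sum(bit_flips(a, b) for a, b in zip([0] + enc, enc))
```

On this sequence, the unencoded words would toggle many bits per sample (127 and 128 differ in all eight positions), while the encoded stream makes a single transition; the article's bounds characterize how much of this kind of saving is achievable in the best case.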
Coverage of Megaprosthesis with Human Acellular Dermal Matrix after Ewing's Sarcoma Resection: A Case Report
A 23-year-old female with Ewing's Sarcoma underwent tibial resection and skeletal reconstruction using proximal tibial allograft prosthetic reconstruction with distal femur endoprosthetic reconstruction and rotating hinge. Human acellular dermal matrix (Alloderm, LifeCell, Branchburg, NJ, USA) was used to wrap the skeletal reconstruction. Soft tissue reconstruction was completed with a rotational gastrocnemius muscle flap and skin graft. Despite prolonged immobilization, the patient quickly regained full range of motion of her skeletal reconstruction. Synthetic mesh, tapes, and tubes are used to perform capsule reconstruction of megaprostheses. This paper describes the role of human acellular dermal matrix in capsule reconstruction around a megaprosthesis.
Proving acceptability properties of relaxed nondeterministic approximate programs
Approximate program transformations such as skipping tasks [29, 30], loop perforation [21, 22, 35], reduction sampling [38], multiple selectable implementations [3, 4, 16, 38], dynamic knobs [16], synchronization elimination [20, 32], approximate function memoization [11], and approximate data types [34] produce programs that can execute at a variety of points in an underlying performance versus accuracy tradeoff space. These transformed programs have the ability to trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control their execution.
We call such transformed programs relaxed programs because they have been extended with additional nondeterminism to relax their semantics and enable greater flexibility in their execution.
We present language constructs for developing and specifying relaxed programs. We also present proof rules for reasoning about properties [28] which the program must satisfy to be acceptable. Our proof rules work with two kinds of acceptability properties: acceptability properties [28], which characterize desired relationships between the values of variables in the original and relaxed programs, and unary acceptability properties, which involve values only from a single (original or relaxed) program. The proof rules support a staged reasoning approach in which the majority of the reasoning effort works with the original program. Exploiting the common structure that the original and relaxed programs share, relational reasoning transfers reasoning effort from the original program to prove properties of the relaxed program.
We have formalized the dynamic semantics of our target programming language and the proof rules in Coq and verified that the proof rules are sound with respect to the dynamic semantics. Our Coq implementation enables developers to obtain fully machine-checked verifications of their relaxed programs. This work was supported by the National Science Foundation (Grants CCF-0811397, CCF-0905244, CCF-1036241, and IIS-0835652), the United States Defense Advanced Research Projects Agency (Grants FA8650-11-C-7192 and FA8750-12-2-0110), and the United States Department of Energy (Grant DE-SC0005288).
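The relaxed-program idea can be sketched in a few lines (an illustrative example only; the paper's language constructs and proof rules are far more general, and the names below are hypothetical): a nondeterministic knob controls how much work a loop does, and a relational acceptability property bounds the relaxed result against the original one.

```python
# Sketch: an original computation, a relaxed version with a nondeterministic
# knob, and a dynamically checked relational acceptability property.

import random

def original(xs):
    return sum(xs)

def relaxed(xs, knob):
    # knob in (0, 1]: approximate fraction of iterations actually executed
    step = max(1, round(1 / knob))
    partial = sum(xs[::step])     # perforated loop
    return partial * step         # rescale to estimate the full sum

def acceptable(xs, epsilon=0.1):
    knob = random.choice([1.0, 0.5, 0.25])   # nondeterministic knob setting
    o, r = original(xs), relaxed(xs, knob)
    return abs(r - o) <= epsilon * abs(o)    # relational acceptability property
```

The paper's contribution is proving such properties statically, for all knob settings, rather than checking them at run time as this sketch does.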
Recommended from our members
Implementing a technique to improve the accuracy of shuffler assays of waste drums
The accuracy of shuffler assays for fissile materials is generally limited by the accuracy of the calibration standards, but when the matrix in a large drum has a sufficiently high hydrogen density (as exists in paper, for example) the accuracy in the active mode can be adversely affected by a nonuniform distribution of the fissile material within the matrix. This paper reports on a technique to determine the distribution nondestructively using delayed neutron signals generated by the shuffler itself. In assays employing this technique, correction factors are applied to the result of the conventional assay according to the distribution. Maximum inaccuracies in assays with a drum of paper, for example, are reduced by a factor of two or three.
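The correction step can be sketched as a simple lookup (the factor values below are hypothetical, chosen only for illustration; in the technique described above they are derived from the shuffler's own delayed-neutron signals):

```python
# Sketch: apply a distribution-dependent correction factor to the
# conventional assay result. Factor values are assumed for illustration.

CORRECTION = {
    "uniform": 1.00,   # calibration-standard geometry, no correction
    "center":  1.25,   # assumed: centered material reads low in a wet matrix
    "edge":    0.85,   # assumed: edge-concentrated material reads high
}

def corrected_assay(raw_grams, distribution):
    """Scale the conventional assay by the inferred-distribution factor."""
    return raw_grams * CORRECTION[distribution]
```

For example, a raw result of 10.0 g with material inferred to be concentrated at the drum center would be corrected to 12.5 g under these illustrative factors.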
Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application -- namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely. This research was supported in part by the National Science Foundation (Grants CCF-0905244, CCF-1036241, CCF-1138967, and IIS-0835652), the United States Department of Energy (Grant DE-SC0008923), and DARPA (Grants FA8650-11-C-7192 and FA8750-12-2-0110).
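The flavor of the analysis can be sketched numerically (an illustrative model, not Rely itself, which is a static analysis over a real language; the per-operation reliabilities below are assumed): if each unreliable hardware operation succeeds with some probability, then under independence the reliability of a result is the product of the reliabilities of the operations that produced it.

```python
# Sketch: quantitative reliability as a product over an operation trace.

OP_RELIABILITY = {           # assumed per-operation success probabilities
    "load": 0.999995,
    "add":  0.99999,
    "mul":  0.99998,
}

def result_reliability(trace):
    r = 1.0
    for op in trace:
        r *= OP_RELIABILITY[op]   # independent-failure model
    return r

# A value computed by two loads, a multiply, and an add:
r = result_reliability(["load", "load", "mul", "add"])
# A developer's Rely-style specification might require r >= 0.9999:
meets_spec = r >= 0.9999
```

Rely's analysis establishes bounds of this kind statically, for every execution path, from the program text and the hardware specification.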